Results 1 - 4 of 4
1.
Front Radiol; 3: 1223377, 2023.
Article in English | MEDLINE | ID: mdl-37886239

ABSTRACT

Purpose: To develop a deep learning-based method to retrospectively quantify T2 from conventional T1- and T2-weighted images. Methods: Twenty-five subjects were imaged using a multi-echo spin-echo sequence to estimate reference prostate T2 maps. Conventional T1- and T2-weighted images were acquired as the input images. A U-Net-based neural network was developed to estimate T2 maps directly from the weighted images using a four-fold cross-validation training strategy. The structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), mean percentage error (MPE), and Pearson correlation coefficient were calculated to evaluate the quality of the network-estimated T2 maps. To explore the potential of this approach in clinical practice, retrospective T2 quantification was performed on a high-risk prostate cancer cohort (Group 1) and a low-risk active surveillance cohort (Group 2). Tumor and non-tumor T2 values were evaluated by an experienced radiologist using region-of-interest (ROI) analysis. Results: The T2 maps generated by the trained network were consistent with the corresponding references. Prostate tissue structures and contrast were well preserved, with a PSNR of 26.41 ± 1.17 dB, an SSIM of 0.85 ± 0.02, and a Pearson correlation coefficient of 0.86. Quantitative ROI analyses performed on 38 prostate cancer patients revealed estimated T2 values of 80.4 ± 14.4 ms and 106.8 ± 16.3 ms for tumor and non-tumor regions, respectively. ROI measurements showed a significant difference between tumor and non-tumor regions on the estimated T2 maps (P < 0.001). In the two-timepoint active surveillance cohort, patients defined as progressors exhibited lower estimated T2 values in the tumor ROIs at the second time point than at the first. Additionally, the T2 difference between the two time points was significantly greater for progressors than for non-progressors (P = 0.010). Conclusion: A deep learning method was developed to estimate prostate T2 maps retrospectively from clinically acquired T1- and T2-weighted images, which has the potential to improve prostate cancer diagnosis and characterization without requiring extra scans.
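The abstract does not include code; as a concrete illustration, below is a minimal sketch of how the reported evaluation metrics (SSIM, PSNR, MPE, and Pearson correlation between an estimated and a reference T2 map) might be computed. It assumes standard scikit-image and SciPy APIs; the function name, mask handling, and data-range convention are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: comparing a network-estimated T2 map against a
# multi-echo spin-echo reference, as in the evaluation described above.
import numpy as np
from scipy.stats import pearsonr
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_t2_map(estimated: np.ndarray, reference: np.ndarray,
                    mask: np.ndarray) -> dict:
    """Return SSIM, PSNR, MPE, and Pearson r for one 2D T2 map (ms)."""
    data_range = float(reference.max() - reference.min())
    ssim = structural_similarity(reference, estimated, data_range=data_range)
    psnr = peak_signal_noise_ratio(reference, estimated, data_range=data_range)
    # Percentage error and correlation over tissue voxels only (T2 > 0 there).
    ref_vox, est_vox = reference[mask], estimated[mask]
    mpe = 100.0 * float(np.mean((est_vox - ref_vox) / ref_vox))
    r, _ = pearsonr(ref_vox, est_vox)
    return {"SSIM": ssim, "PSNR_dB": psnr, "MPE_%": mpe, "Pearson_r": r}
```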

2.
Magn Reson Med; 90(4): 1672-1681, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37246485

ABSTRACT

PURPOSE: To develop a deep learning method to synthesize conventional contrast-weighted images of the brain from MR multitasking spatial factors. METHODS: Eighteen subjects were imaged using a whole-brain quantitative T1-T2-T1ρ MR multitasking sequence. Conventional contrast-weighted images consisting of T1 MPRAGE, T1 gradient echo, and T2 fluid-attenuated inversion recovery (FLAIR) were acquired as target images. A 2D U-Net-based neural network was trained to synthesize the conventional weighted images from MR multitasking spatial factors. Quantitative assessment and image quality ratings by two radiologists were performed to evaluate the quality of the deep-learning-based synthesis, in comparison with Bloch-equation-based synthesis from MR multitasking quantitative maps. RESULTS: The deep-learning synthetic images showed brain tissue contrasts comparable to the reference images from true acquisitions and were substantially better than the Bloch-equation-based synthesis results. Averaged over the three contrasts, the deep learning synthesis achieved a normalized root mean square error of 0.184 ± 0.075, a peak SNR of 28.14 ± 2.51, and a structural similarity index of 0.918 ± 0.034, all significantly better than the Bloch-equation-based synthesis (p < 0.05). The radiologists' ratings showed that, compared with true acquisitions, the deep learning synthesis had no notable quality degradation and was better than the Bloch-equation-based synthesis. CONCLUSION: A deep learning technique was developed to synthesize conventional weighted images of the brain from MR multitasking spatial factors, enabling the simultaneous acquisition of multiparametric quantitative maps and clinical contrast-weighted images in a single scan.
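For context, the Bloch-equation-based baseline referred to above synthesizes a weighted image by plugging quantitative maps into an analytical signal model. The sketch below shows one common approximation for a T2-FLAIR contrast; the signal equation, the sequence parameters, and the use of a proton-density map are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of Bloch-equation-based synthesis: an approximate
# inversion-recovery spin-echo (T2-FLAIR) signal computed voxelwise from
# proton-density (PD), T1, and T2 maps. All times are in milliseconds;
# the parameter defaults below are illustrative, not the paper's protocol.
import numpy as np

def synthesize_flair(pd_map: np.ndarray, t1_map: np.ndarray,
                     t2_map: np.ndarray,
                     tr: float = 9000.0, ti: float = 2500.0,
                     te: float = 120.0) -> np.ndarray:
    t1 = np.clip(t1_map, 1.0, None)  # guard against divide-by-zero voxels
    t2 = np.clip(t2_map, 1.0, None)
    inversion_recovery = 1.0 - 2.0 * np.exp(-ti / t1) + np.exp(-tr / t1)
    return pd_map * inversion_recovery * np.exp(-te / t2)
```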


Subjects
Deep Learning; Humans; Brain/diagnostic imaging; Magnetic Resonance Imaging/methods; Neural Networks, Computer; Image Processing, Computer-Assisted/methods
3.
Comput Biol Med; 155: 106462, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36857942

ABSTRACT

Automatic segmentation of skin lesions is crucial for diagnosing and treating skin diseases. Although current medical image segmentation methods have substantially improved skin lesion segmentation, two major challenges still limit performance: (i) segmentation targets have irregular shapes and diverse sizes, and (ii) lesions often show low contrast or blurred boundaries against the background. To address these issues, this study proposes a Gated Fusion Attention Network (GFANet) that uses two progressive relation decoders to accurately segment skin lesion images. First, a Context Features Gated Fusion Decoder (CGFD) fuses multiple levels of contextual features to produce an initial prediction that serves as a guide map. This prediction is then refined by a second decoder consisting of a shape flow and a final Gated Convolution Fusion (GCF) module: within the shape flow, a set of Channel Reverse Attention (CRA) and GCF modules is applied iteratively to combine the features of the current layer with the prediction result of the next adjacent layer, gradually extracting boundary information. Finally, to speed up network convergence and improve segmentation accuracy, GCF fuses low-level features from the encoder with the final output of the shape flow. To verify the effectiveness and advantages of the proposed GFANet, we conducted extensive experiments on four publicly available skin lesion datasets (International Skin Imaging Collaboration [ISIC] 2016, ISIC 2017, ISIC 2018, and PH2) and compared it with state-of-the-art methods. The experimental results show that GFANet achieves excellent segmentation performance on commonly used evaluation metrics, with stable segmentation results. The source code is available at https://github.com/ShiHanQ/GFANet.
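The authors' implementation is in the linked repository; as a rough illustration of the gating idea behind a module like GCF, the sketch below fuses two feature streams with a learned sigmoid gate. The module structure, layer choices, and names are assumptions for illustration, not the published architecture.

```python
# Hypothetical PyTorch sketch of gated fusion of two feature maps: a
# sigmoid gate, computed from both inputs, interpolates between them
# per position before a small convolutional block merges the result.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.fuse = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        g = self.gate(torch.cat([a, b], dim=1))  # per-pixel weights in (0, 1)
        return self.fuse(g * a + (1.0 - g) * b)
```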


Subjects
Skin Diseases; Humans; Skin; Benchmarking; Software; Image Processing, Computer-Assisted
4.
Magn Reson Med; 87(1): 488-495, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34374468

ABSTRACT

PURPOSE: To develop a deep-learning-based method to quantify multiple parameters in the brain from conventional contrast-weighted images. METHODS: Eighteen subjects were imaged using an MR Multitasking sequence to generate reference T1 and T2 maps of the brain. Conventional contrast-weighted images consisting of T1 MPRAGE, T1 GRE, and T2 FLAIR were acquired as input images. A U-Net-based neural network was trained to estimate T1 and T2 maps simultaneously from the contrast-weighted images. Six-fold cross-validation was performed to compare the network outputs with the MR Multitasking references. RESULTS: The deep-learning T1/T2 maps were comparable with the references, and brain tissue structures and image contrasts were well preserved. A peak signal-to-noise ratio >32 dB and a structural similarity index >0.97 were achieved for both parameter maps. Computed over brain parenchyma (excluding CSF), the mean absolute errors (and mean percentage errors) were 52.7 ms (5.1%) for T1 and 5.4 ms (7.1%) for T2. ROI measurements in four tissue compartments (cortical gray matter, white matter, putamen, and thalamus) showed that the T1 and T2 values provided by the network outputs agreed with the MR Multitasking reference maps: the mean differences were smaller than ±1%, and the limits of agreement were within ±5% for T1 and within ±10% for T2 after accounting for the mean differences. CONCLUSION: A deep-learning-based technique was developed to estimate T1 and T2 maps from conventional contrast-weighted images of the brain, enabling simultaneous qualitative and quantitative MRI without modifying clinical protocols.
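The agreement statistics quoted above (mean differences and limits of agreement) correspond to a Bland-Altman-style analysis of per-subject ROI values. A minimal sketch, assuming paired per-subject measurements expressed as percent differences relative to the reference, might look like this; the variable names and the example arrays are illustrative only.

```python
# Hypothetical sketch of a Bland-Altman-style ROI agreement analysis:
# mean percent difference (bias) and 95% limits of agreement between
# network-estimated and MR Multitasking reference values.
import numpy as np

def bland_altman_percent(estimated: np.ndarray, reference: np.ndarray):
    """Return (bias, (lower, upper)) as percentages of the reference."""
    pct_diff = 100.0 * (estimated - reference) / reference
    bias = float(pct_diff.mean())
    half_width = 1.96 * float(pct_diff.std(ddof=1))
    return bias, (bias - half_width, bias + half_width)

# Illustrative dummy values: per-subject white-matter T1 ROI means (ms).
est = np.array([812.0, 798.0, 845.0, 820.0, 831.0, 808.0])
ref = np.array([805.0, 810.0, 838.0, 826.0, 829.0, 815.0])
bias, (lo, hi) = bland_altman_percent(est, ref)
```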


Subjects
Deep Learning; Brain/diagnostic imaging; Gray Matter; Humans; Magnetic Resonance Imaging; Signal-To-Noise Ratio